Evaluating Color Blindness Simulations
How well does each simulation model match my perception? What severity do I have?
Introduction
We've seen in Review of Open Source Color Blindness Simulations and Understanding LMS-based Color Blindness Simulations that several models are available to simulate color vision deficiencies (CVD).
But if I'm color blind, how do I know which model best fits my vision, and what severity factor I should pick? Knowing this would be useful to tune correction algorithms or to communicate my perception to other people.
One way for a person with CVD to validate a model is to check that the simulated images look similar to the original images.
Generated Ishihara-like plates to evaluate the kind and severity of CVD
It is possible to generate Ishihara-like plates for a given deficiency and severity factor. The main idea is to pick a confusion segment in the LMS space and generate an Ishihara-like image using one end of the segment as the background color and the other end as the foreground color (a single-digit number). Lower degrees of severity can then be tested by making the segment shorter, effectively reducing the distance between the two colors along the confusion line. This makes them harder and harder to differentiate for someone with anomalous trichromacy.
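The segment-shrinking idea can be sketched in a few lines of NumPy. This is a simplified illustration, not the daltonlens implementation: `confusion_pair` is a hypothetical helper, and the endpoint colors below are arbitrary stand-ins for the two ends of a full-severity confusion segment.

```python
import numpy as np

def confusion_pair(color_a, color_b, severity):
    """Return (background, foreground) colors for a severity in [0, 1].

    color_a, color_b: the two ends of a full-severity confusion segment,
    e.g. colors that project to the same point for a dichromat.
    """
    color_a = np.asarray(color_a, dtype=float)
    color_b = np.asarray(color_b, dtype=float)
    mid = (color_a + color_b) / 2.0
    # severity=1.0 keeps the full segment; as severity approaches 0 both
    # colors collapse onto the midpoint and become indistinguishable for
    # everyone, not just dichromats.
    return (mid + (color_a - mid) * severity,
            mid + (color_b - mid) * severity)

# Shrink a sample segment to half its length.
bg, fg = confusion_pair([0.8, 0.4, 0.3], [0.2, 0.6, 0.3], severity=0.5)
```

A person with full dichromacy should fail to read the plate at any severity, while an anomalous trichromat will start failing somewhere below 1.0.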
By looking at these generated plates it becomes possible to self-evaluate the kind of deficiency by checking on which plates the numbers are the hardest to read. For example, I can read all the numbers in the tritan plates, most numbers in the deutan plates (but not all), and barely any in the protan plates. This confirms that I am a protan.
Then the severity can be evaluated by looking at plates generated with decreasing severity and checking at which value it becomes impossible to see any number. For me that's around 0.8 for the protan plate.
# This cell defines all our imports and a few utilities
import daltonlens.generate as generate
import daltonlens.simulate as simulate
import numpy as np
import plotly.express as px
import plotly.graph_objs as go
from plotly.subplots import make_subplots
# Utility for plotly imshow
hide_image_axes = dict(yaxis_visible=False, yaxis_showticklabels=False, xaxis_visible=False, xaxis_showticklabels=False, margin=dict(l=0, r=0, b=0, t=0))
def showAnimatedImages(simulator: simulate.Simulator, title: str, include_tritan: bool):
    deficiencies = {
        simulate.Deficiency.PROTAN: 'Protan',
        simulate.Deficiency.DEUTAN: 'Deutan',
    }
    # Only Brettel is expected to give realistic Tritan simulation
    if include_tritan:
        deficiencies.update({ simulate.Deficiency.TRITAN: 'Tritan' })

    # Build one stacked image per severity level, from 1.0 down to 0.1
    images = []
    for severity in np.arange(1.0, 0.09, -0.1):
        severity_images = []
        for deficiency, deficiency_name in deficiencies.items():
            im = generate.simulator_ishihara_plate(simulator, deficiency, severity, f"{deficiency_name} - Severity {severity:.1f}")
            severity_images.append(im)
        images.append(np.vstack(severity_images))
    images = np.stack(images, axis=0)
    # Animate over the severity axis, one frame per severity level
    fig = px.imshow(images, height=704*1.4, animation_frame=0, title=title).update_layout(hide_image_axes).update_layout(margin=None)
    fig.show()
showAnimatedImages(simulate.Simulator_Brettel1997(), 'Brettel 1997', include_tritan=True)
showAnimatedImages(simulate.Simulator_Vienot1999(), 'Viénot 1999', include_tritan=False)
showAnimatedImages(simulate.Simulator_Machado2009(), 'Machado 2009', include_tritan=False)